

Training approach





Efficient and Versatile Model for Multilingual Information Retrieval of Islamic Text: Development and Deployment in Real-World Scenarios

Pavlova, Vera, Makhlouf, Mohammed

arXiv.org Artificial Intelligence

Despite recent advancements in Multilingual Information Retrieval (MLIR), a significant gap remains between research and practical deployment. Many studies assess MLIR performance in isolated settings, limiting their applicability to real-world scenarios. In this work, we leverage the unique characteristics of the Quranic multilingual corpus to examine the optimal strategies to develop an ad-hoc IR system for the Islamic domain that is designed to satisfy users' information needs in multiple languages. We prepared eleven retrieval models employing four training approaches: monolingual, cross-lingual, translate-train-all, and a novel mixed method combining cross-lingual and monolingual techniques. Evaluation on an in-domain dataset demonstrates that the mixed approach achieves promising results across diverse retrieval scenarios. Furthermore, we provide a detailed analysis of how different training configurations affect the embedding space and their implications for multilingual retrieval effectiveness. Finally, we discuss deployment considerations, emphasizing the cost-efficiency of deploying a single versatile, lightweight model for real-world MLIR applications.
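The mixed strategy described above can be pictured as interleaving monolingual and cross-lingual query-passage pairs within a single contrastive fine-tuning run of a multilingual bi-encoder. Below is a minimal sketch using the sentence-transformers library with an in-batch-negatives loss; the base model, the placeholder pairs, and the hyperparameters are illustrative assumptions, not the authors' exact configuration.

from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Assumed multilingual base encoder (any multilingual bi-encoder would do here).
model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")

# Hypothetical training pairs: monolingual pairs (query and passage in the same
# language) mixed with cross-lingual pairs (query language differs from passage language).
train_examples = [
    InputExample(texts=["Arabic query ...", "Arabic passage ..."]),    # monolingual
    InputExample(texts=["English query ...", "Arabic passage ..."]),   # cross-lingual
    InputExample(texts=["English query ...", "English passage ..."]),  # monolingual
    # ... both pair types are shuffled into the same training set
]

train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)
train_loss = losses.MultipleNegativesRankingLoss(model)  # in-batch negatives

# Contrastive fine-tuning over the mixed pair set (epochs/warmup are placeholders).
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=100,
)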



Reinforcement Learning Agent for a 2D Shooter Game

Ackermann, Thomas, Spang, Moritz, Gardi, Hamza A. A.

arXiv.org Artificial Intelligence

Reinforcement learning agents in complex game environments often suffer from sparse rewards, training instability, and poor sample efficiency. This paper presents a hybrid training approach that combines offline imitation learning with online reinforcement learning for a 2D shooter game agent. We implement a multi-head neural network with separate outputs for behavioral cloning and Q-learning, unified by shared feature extraction layers with attention mechanisms. Initial experiments using pure deep Q-Networks exhibited significant instability, with agents frequently reverting to poor policies despite occasional good performance. To address this, we developed a hybrid methodology that begins with behavioral cloning on demonstration data from rule-based agents, then transitions to reinforcement learning. Our hybrid approach consistently achieves a win rate above 70% against rule-based opponents, substantially outperforming pure reinforcement learning methods, which showed high variance and frequent performance degradation. The multi-head architecture enables effective knowledge transfer between learning modes while maintaining training stability. Results demonstrate that combining demonstration-based initialization with reinforcement learning optimization provides a robust solution for developing game AI agents in complex multi-agent environments where pure exploration proves insufficient.
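As a rough illustration of the multi-head idea, the PyTorch sketch below defines a shared feature trunk with a self-attention layer and two output heads, one trained with a behavioral-cloning cross-entropy loss and one reserved for Q-learning. Layer sizes, attention placement, and the training snippet are assumptions for illustration, not the paper's exact architecture.

import torch
import torch.nn as nn

class HybridAgentNet(nn.Module):
    """Shared trunk with attention, plus separate BC and Q-learning heads."""

    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        # Single-token self-attention over the feature vector (illustrative choice).
        self.attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.bc_head = nn.Linear(hidden, n_actions)  # action logits for imitation
        self.q_head = nn.Linear(hidden, n_actions)   # Q-values for RL fine-tuning

    def forward(self, obs: torch.Tensor):
        h = self.trunk(obs).unsqueeze(1)              # (B, 1, hidden)
        h, _ = self.attn(h, h, h)
        h = h.squeeze(1)
        return self.bc_head(h), self.q_head(h)

# Phase 1 (sketch): behavioral cloning on demonstrations from rule-based agents.
net = HybridAgentNet(obs_dim=32, n_actions=6)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
bc_loss_fn = nn.CrossEntropyLoss()

obs = torch.randn(64, 32)                    # placeholder demonstration observations
demo_actions = torch.randint(0, 6, (64,))    # placeholder demonstration actions
logits, _ = net(obs)
loss = bc_loss_fn(logits, demo_actions)
opt.zero_grad(); loss.backward(); opt.step()
# Phase 2 would switch to a DQN-style TD loss on q_head while keeping the shared trunk.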


A Experiment Details

Neural Information Processing Systems

We train our models with the Adam optimizer using a learning rate of 0.002.

A.2 Data Processing

We use the following features for training our models. Our approach allows for customization of actionable features and constraints on their values. We require that the education level can only increase. The actionable features are: (i) education level and (ii) the number of prison rule violations reported during the sample sentence.
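A small sketch of how the stated actionability constraints might be enforced when screening candidate feature changes; the function and field names are hypothetical, not the paper's code.

# Hypothetical helper: only the two actionable features may change, and the
# education level is constrained to be non-decreasing.
def is_actionable(original: dict, candidate: dict) -> bool:
    actionable = {"education_level", "prison_rule_violations"}
    for key in original:
        if key not in actionable and candidate[key] != original[key]:
            return False
    return candidate["education_level"] >= original["education_level"]

# Example: raising the education level is allowed, lowering it is not.
x = {"education_level": 2, "prison_rule_violations": 3, "age": 40}
assert is_actionable(x, {**x, "education_level": 3})
assert not is_actionable(x, {**x, "education_level": 1})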



R-FORCE: Robust Learning for Random Recurrent Neural Networks

Zheng, Yang, Shlizerman, Eli

arXiv.org Artificial Intelligence

Random Recurrent Neural Networks (RRNN) are the simplest recurrent networks for modeling and extracting features from sequential data. The simplicity, however, comes at a price: RRNN are known to be susceptible to the vanishing/exploding gradient problem when trained with gradient-descent-based optimization. To enhance the robustness of RRNN, alternative training approaches have been proposed. Specifically, the FORCE learning approach introduced a recursive least-squares alternative for training RRNN and was shown to be applicable even to the challenging task of target-learning, where the network is tasked with generating dynamic patterns with no guiding input. While FORCE training indicates that solving target-learning is possible, it appears to be effective only in a specific regime of network dynamics (edge of chaos). We therefore investigate whether initializing RRNN connectivity according to a tailored distribution can guarantee robust FORCE learning. We generate such a distribution by inferring four generating principles that constrain the spectrum of the network Jacobian to remain in the stability region. This initialization, together with FORCE learning, provides a robust training method: Robust-FORCE (R-FORCE). We validate R-FORCE performance on various target functions for a wide range of network configurations and compare it with alternative methods. Our experiments indicate that R-FORCE facilitates significantly more stable and accurate target-learning for a wide class of RRNN. Such stability becomes critical when modeling multi-dimensional sequences, as we demonstrate on time series of human body joints during physical movements.
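For context, the FORCE / recursive-least-squares update referenced above adjusts the readout weights online while the network runs. The NumPy sketch below follows the standard Sussillo-Abbott formulation; network size, gain, time step, and the target function are arbitrary illustrative choices, and the tailored connectivity initialization that distinguishes R-FORCE is not shown.

import numpy as np

rng = np.random.default_rng(0)
N, T, dt, alpha = 200, 2000, 0.1, 1.0              # size, steps, step size, RLS init (illustrative)
g = 1.5                                            # gain; classic edge-of-chaos regime
J = g * rng.standard_normal((N, N)) / np.sqrt(N)   # random recurrent connectivity
w_fb = rng.uniform(-1.0, 1.0, N)                   # fixed random feedback weights
w = np.zeros(N)                                    # readout weights, trained by RLS
P = np.eye(N) / alpha                              # running inverse correlation estimate
x = 0.5 * rng.standard_normal(N)                   # network state

target = np.sin(np.arange(T) * dt)                 # arbitrary target for target-learning

for t in range(T):
    r = np.tanh(x)                                 # firing rates
    z = w @ r                                      # current readout
    # RLS / FORCE step: nudge the readout toward the target.
    k = P @ r
    c = 1.0 / (1.0 + r @ k)
    P -= c * np.outer(k, k)
    w -= c * (z - target[t]) * k
    # Network dynamics, with the readout fed back through fixed random weights.
    x += dt * (-x + J @ r + w_fb * z)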


Evaluation of Remote Driver Performance in Urban Environment Operational Design Domains

Hans, Ole, Walter, Benedikt, Adamy, Jürgen

arXiv.org Artificial Intelligence

Remote driving has emerged as a solution for enabling human intervention in scenarios where Automated Driving Systems (ADS) face challenges, particularly in urban Operational Design Domains (ODDs). This study evaluates the performance of Remote Drivers (RDs) of passenger cars in a representative urban ODD in Las Vegas, focusing on the influence of cumulative driving experience and targeted training approaches. Using performance metrics such as efficiency, braking, acceleration, and steering, the study shows that driving experience can lead to noticeable improvements in RD performance and demonstrates how experience up to 600 km correlates with improved vehicle control. In addition, driving efficiency exhibited a positive trend with increasing kilometers, particularly during the first 300 km of experience, before plateauing beyond 400 km within a range of 0.35 to 0.42 km/min in the defined ODD. The research further compares ODD-specific training methods, with the detailed ODD training approach attaining notable advantages over the other approaches. The findings underscore the importance of tailored ODD training in enhancing RD performance, safety, and scalability for Remote Driving Systems (RDS) in real-world applications, while identifying opportunities for optimizing training protocols to address both routine and extreme scenarios. The study provides a robust foundation for advancing RDS deployment within urban environments, contributing to the development of scalable and safety-critical remote operation standards.
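As a small illustration of the efficiency metric discussed above (distance covered per minute of active driving), the sketch below bins hypothetical trip logs by cumulative remote-driving experience and averages km/min per bin; the column names and values are invented for the example and are not the study's data.

import pandas as pd

# Hypothetical remote-driving trip log (invented columns and values).
trips = pd.DataFrame({
    "cumulative_km": [50, 120, 280, 350, 420, 580],  # RD experience before the trip
    "distance_km":   [4.2, 5.1, 4.8, 5.6, 6.0, 5.9],
    "duration_min":  [14.0, 15.5, 13.2, 14.8, 15.1, 14.6],
})

trips["efficiency_km_per_min"] = trips["distance_km"] / trips["duration_min"]

# Bin by 100 km of cumulative experience and average efficiency per bin.
bins = pd.cut(trips["cumulative_km"], bins=range(0, 700, 100))
print(trips.groupby(bins, observed=True)["efficiency_km_per_min"].mean())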